Knowledge-Intensive Language Understanding for Explainable AI

Authors

Abstract

AI systems have seen significant adoption in various domains. At the same time, further adoption in some domains is hindered by the inability to fully trust that an AI system will not harm a human. Beyond this, fairness, privacy, transparency, and explainability are vital to developing trustworthy systems. As stated in Describing Trustworthy AI (https://www.ibm.com/watson/trustworthy-ai), "Trust comes through understanding. How AI-led decisions are made and what determining factors were included are crucial to understand." The subarea of explaining AI systems has come to be known as XAI. Multiple aspects of a system can be explained; these include biases the data might have, a lack of data points in a particular region of the example space, fairness in gathering the data, feature importances, etc. However, besides these, it is critical to have human-centered explanations that are directly related to decision-making, similar to how a domain expert makes decisions based on "domain knowledge," including well-established, peer-validated explicit guidelines. To understand and validate a system's outcomes (such as classifications, recommendations, and predictions), and to develop trust in the system, it is necessary to involve the knowledge that humans use. Contemporary XAI methods have not yet addressed what is needed to enable such expert-like decision-making. Figure 1 shows the stages of taking an AI system into the real world.
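For concreteness, below is a minimal sketch of the kind of "contemporary XAI" explanation the abstract refers to: a feature-importance ranking computed by permutation. This is not from the paper; it assumes scikit-learn is available and uses a standard toy dataset and a RandomForestClassifier purely as illustrative placeholders. Such statistical importances describe what a model is sensitive to, but not the explicit domain knowledge a human expert would reason with, which is the gap the paper highlights.

```python
# Minimal sketch (assumption: scikit-learn installed; dataset/model are illustrative).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy dataset standing in for any tabular decision-support task.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Importance of each feature = average drop in test accuracy when that
# feature's values are randomly shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features: a purely statistical explanation,
# with no reference to expert guidelines or domain knowledge.
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: p[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")
```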


Related resources

Transparent, explainable, and accountable AI for robotics

Recent governmental statements from the United States (USA) (1, 2), the European Union (EU) (3), and China (4) identify artificial intelligence (AI) and robotics as economic and policy priorities. Despite this enthusiasm, challenges remain. Systems can make unfair and discriminatory decisions, replicate or develop biases, and behave in inscrutable and unexpected ways in highly sensitive enviro...


Semantically-based priors and nuanced knowledge core for Big Data, Social AI, and language understanding

Noise-resistant and nuanced, COGBASE makes 10 million pieces of commonsense data and a host of novel reasoning algorithms available via a family of semantically-driven prior probability distributions. Machine learning, Big Data, natural language understanding/processing, and social AI can draw on COGBASE to determine lexical semantics, infer goals and interests, simulate emotion and affect, cal...


Logic meets Probability: Towards Explainable AI Systems for Uncertain Worlds

Logical AI is concerned with formal languages to represent and reason with qualitative specifications; statistical AI is concerned with learning quantitative specifications from data. To combine the strengths of these two camps, there has been exciting recent progress on unifying logic and probability. We review the many guises for this union, while emphasizing the need for a formal language to...


Enriching Knowledge Sources for Natural Language Understanding

This paper presents the complete and consistent ontological annotation of the nominal part of WordNet. The annotation has been carried out using the semantic features defined in the EuroWordNet Top Concept Ontology and made available to the NLP community. Up to now only an initial core set of 1,024 synsets, the so-called Base Concepts, was ontologized in such a way. The work has been achieved b...


Linguistic Knowledge Sources for Spoken Language Understanding

The objective of the Unisys Spoken Language Systems effort is to develop and demonstrate technology for the understanding of goal-directed spontaneous speech. The Unisys spoken language architecture couples speech recognition systems with the Unisys discourse understanding system, PUNDIT. PUNDIT is a broad-coverage language understanding system used in a variety of message understanding applic...



Journal

Journal: IEEE Internet Computing

Year: 2021

ISSN: 1089-7801, 1941-0131

DOI: https://doi.org/10.1109/mic.2021.3101919